A promising way to improve cloud parameterizations in climate models, and thus climate projections, is to use deep learning in combination with training data from storm-resolving model (SRM) simulations. The ICOsahedral Non-hydrostatic (ICON) modeling framework permits simulations ranging from numerical weather prediction to climate projections, making it an ideal target to develop neural network (NN) based parameterizations for sub-grid scale processes. Within the ICON framework, we train NN-based cloud cover parameterizations with coarse-grained data based on realistic regional and global ICON SRM simulations. We set up three different types of NNs that differ in the degree of vertical locality they assume when diagnosing cloud cover from coarse-grained atmospheric state variables. The NNs accurately estimate sub-grid scale cloud cover from coarse-grained data with geographical characteristics similar to their training data. Additionally, globally trained NNs can reproduce the sub-grid scale cloud cover of the regional SRM simulation. Using the game-theory based interpretability library SHapley Additive exPlanations (SHAP), we identify an over-emphasis on specific humidity and cloud ice as the reason why our column-based NN cannot fully generalize from the global to the regional coarse-grained SRM data. The explainability tool also helps visualize similarities and differences in feature importance between regionally and globally trained column-based NNs, and reveals a local relationship between their cloud cover predictions and the thermodynamic environment. Our results show the potential of deep learning to derive accurate yet interpretable cloud cover parameterizations from global SRMs, and suggest that neighborhood-based models may be a good compromise between accuracy and generalizability.
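To make the setup concrete, below is a minimal sketch of a column-based cloud-cover NN together with a SHAP attribution call. The layer sizes, input variables, vertical resolution, and training loop are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

# Assumed column layout: N_VARS coarse-grained state variables
# (e.g. T, p, q_v, q_c, q_i) on N_LEVELS vertical levels.
N_LEVELS, N_VARS = 27, 5

model = nn.Sequential(
    nn.Linear(N_LEVELS * N_VARS, 256),
    nn.Tanh(),
    nn.Linear(256, 256),
    nn.Tanh(),
    nn.Linear(256, N_LEVELS),  # fractional cloud cover per level
)

# Placeholder data standing in for coarse-grained columns and
# SRM-derived cloud cover targets.
x = torch.randn(1024, N_LEVELS * N_VARS)
y = torch.rand(1024, N_LEVELS)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# Feature attribution with the shap package's PyTorch-compatible
# DeepExplainer; for a multi-output model it returns one attribution
# array per output (here, per vertical level).
import shap
explainer = shap.DeepExplainer(model, x[:100])
shap_values = explainer.shap_values(x[100:110])
```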
Data-driven algorithms, in particular neural networks, can emulate the effects of unresolved processes in coarse-resolution climate models when trained on high-resolution simulation data; however, they often make large generalization errors when evaluated in conditions they were not trained on. Here, we propose to physically rescale the inputs and outputs of machine learning algorithms to help them generalize to unseen climates. Applied to the offline parameterization of subgrid-scale thermodynamics in three distinct climate models, we show that rescaled, or "climate-invariant", neural networks make accurate predictions in test climates warmer than their training climates. Additionally, "climate-invariant" neural networks facilitate generalization between aquaplanet and Earth-like simulations. Through visualization and attribution methods, we show that, compared to standard machine learning models, "climate-invariant" algorithms learn more local and robust relations between storm-scale convection, radiation, and their synoptic thermodynamic environment. Overall, these results suggest that incorporating physical knowledge into data-driven models of Earth system processes can improve their consistency and ability to generalize across climate regimes.
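As an illustration of the rescaling idea, the sketch below converts specific humidity to relative humidity (via Bolton's approximation for saturation vapor pressure) before it reaches the network, so the model sees a variable whose distribution shifts less under warming. The paper's full set of "climate-invariant" transformations may differ from this single example.

```python
import numpy as np

def relative_humidity(q, T, p):
    """Rescale specific humidity q (kg/kg) to relative humidity,
    given temperature T (K) and pressure p (Pa). Uses Bolton's
    approximation for saturation vapor pressure over liquid water."""
    e_s = 611.2 * np.exp(17.67 * (T - 273.15) / (T - 29.65))  # Pa
    q_sat = 0.622 * e_s / (p - 0.378 * e_s)
    return q / q_sat

def rescale_inputs(q, T, p):
    """A 'climate-invariant' pipeline applies such physical rescalings
    to the inputs before training; this feature set is illustrative."""
    return np.stack([relative_humidity(q, T, p), T, p], axis=-1)
```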
Climate change is expected to increase the likelihood of drought events, with severe implications for food security. Unlike other natural disasters, droughts have a slow onset and depend on various external factors, which makes drought detection in climate data difficult. In contrast to existing works that rely on simple relative drought indices as ground-truth data, we build on the soil moisture index (SMI) obtained from a hydrological model. This index is directly related to insufficient water availability for vegetation. Given six months of ERA5-Land climate input data together with land-use information from MODIS satellite observations, we compare different models with sequential inductive bias for detecting droughts based on the SMI. We use PR-AUC as the evaluation measure to account for the class imbalance and obtain promising results despite a challenging time-based split. We further show in an ablation study that the models retain their predictive capability when given input data of coarser resolution, as is often encountered with climate models.
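A minimal sketch of the PR-AUC evaluation on an imbalanced label set, using scikit-learn's average-precision implementation; the labels and scores below are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.05).astype(int)   # ~5% droughts: imbalanced
y_score = np.clip(y_true * 0.6 + rng.random(1000) * 0.5, 0.0, 1.0)

# Average precision summarizes the precision-recall curve and, unlike
# ROC-AUC, is not inflated by the large negative (no-drought) class.
pr_auc = average_precision_score(y_true, y_score)
print(f"PR-AUC: {pr_auc:.3f}")
```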
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches to addressing the COD has led us to solving high-dimensional PDEs. This has opened doors to solving a variety of real-world problems, ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as the classical Dense Neural Network (DNN). We also show how TNN can be trained faster than DNN for the same accuracy. Besides TNN, we introduce the Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
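The parameter-saving idea can be illustrated with the simplest tensor-network-style layer: a rank-r factorization of a dense weight matrix. The paper's TNN uses a richer tensorized structure, so this is a sketch of the principle rather than of the architecture itself.

```python
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Linear layer with a rank-r factorized weight W = U @ V.
    Stores r*(m + n) parameters instead of m*n, a large saving
    when r << min(m, n)."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.1)
        self.V = nn.Parameter(torch.randn(rank, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # (B, in) @ (in, r) @ (r, out) -> (B, out)
        return x @ self.V.T @ self.U.T + self.bias

dense = nn.Linear(512, 512)                # 512*512 + 512 ~ 263k parameters
factored = FactorizedLinear(512, 512, 16)  # 16*(512+512) + 512 ~ 17k parameters
```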
Training a very deep neural network is a challenging task, as the deeper a neural network is, the more non-linear it is. We compare the performance of various preconditioned Langevin algorithms with their non-Langevin counterparts for training neural networks of increasing depth. For shallow neural networks, Langevin algorithms do not lead to any improvement; however, the deeper the network is, the greater the gains provided by Langevin algorithms. Adding noise to the gradient descent allows the optimizer to escape local traps, which are more frequent for very deep neural networks. Following this heuristic, we introduce a new Langevin algorithm called Layer Langevin, which consists in adding Langevin noise only to the weights associated with the deepest layers. We then demonstrate the benefits of Langevin and Layer Langevin algorithms for training popular deep residual architectures for image classification.
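A minimal sketch of a Layer Langevin update: plain SGD on the shallow parameters and a noisy Langevin step on the deepest ones. The sqrt(lr) noise scaling and the shallow/deep split are assumptions here; conventions for the noise level vary, and the paper defines its own.

```python
import torch

def layer_langevin_step(params_shallow, params_deep, lr, sigma):
    """One plain-SGD step on the shallow layers and one Langevin step
    (gradient step plus Gaussian noise scaled by sqrt(lr)) on the
    deepest layers. Call after loss.backward(); params_deep would be,
    e.g., the parameters of the last few blocks."""
    with torch.no_grad():
        for p in params_shallow:
            p -= lr * p.grad
        for p in params_deep:
            p -= lr * p.grad
            p += sigma * (lr ** 0.5) * torch.randn_like(p)
```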
Machine learning (ML) models can leak information about users, and differential privacy (DP) provides a rigorous way to bound that leakage under a given budget. This DP budget can be regarded as a new type of compute resource in workloads of multiple ML models training on user data. Once it is used, the DP budget is forever consumed. Therefore, it is crucial to allocate it most efficiently to train as many models as possible. This paper presents a scheduler for privacy that optimizes for efficiency. We formulate privacy scheduling as a new type of multidimensional knapsack problem, called privacy knapsack, which maximizes DP budget efficiency. We show that privacy knapsack is NP-hard, hence practical algorithms are necessarily approximate. We develop an approximation algorithm for privacy knapsack, DPK, and evaluate it on microbenchmarks and on a new, synthetic private-ML workload we developed from the Alibaba ML cluster trace. We show that DPK: (1) often approaches the efficiency-optimal schedule, (2) consistently schedules more tasks compared to a state-of-the-art privacy scheduling algorithm that focuses on fairness (1.3-1.7x in Alibaba, 1.0-2.6x in microbenchmarks), but (3) sacrifices some level of fairness for efficiency. Therefore, using DPK, DP ML operators should be able to train more models on the same amount of user data while offering the same privacy guarantee to their users.
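For intuition, here is a generic greedy heuristic for a multidimensional knapsack that scores training tasks by utility per unit of dominant DP-budget demand. It is illustrative only and not the DPK algorithm from the paper; task names, demands, and utilities are made up.

```python
def greedy_schedule(tasks, budget):
    """Greedy approximation for a multidimensional knapsack with one DP
    budget dimension per data block (all initial budgets assumed > 0).
    tasks: list of (name, {block: epsilon_demand}, utility)
    budget: {block: epsilon_remaining}, mutated as budget is consumed."""
    def score(task):
        _, demand, utility = task
        dominant = max(demand[b] / budget[b] for b in demand)
        return utility / dominant

    scheduled = []
    for name, demand, utility in sorted(tasks, key=score, reverse=True):
        if all(demand[b] <= budget[b] for b in demand):
            for b in demand:
                budget[b] -= demand[b]   # DP budget is consumed forever
            scheduled.append(name)
    return scheduled

tasks = [
    ("train-A", {"block1": 0.5}, 10.0),
    ("train-B", {"block1": 0.4, "block2": 0.2}, 8.0),
    ("train-C", {"block1": 0.4}, 3.0),
]
print(greedy_schedule(tasks, {"block1": 1.0, "block2": 1.0}))
# -> ['train-A', 'train-B']; train-C no longer fits in block1
```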
Imperfect information games (IIG) are games in which each player only partially observes the current game state. We study how to learn $\epsilon$-optimal strategies in a zero-sum IIG through self-play with trajectory feedback. We give a problem-independent lower bound $\mathcal{O}(H(A_{\mathcal{X}}+B_{\mathcal{Y}})/\epsilon^2)$ on the required number of realizations to learn these strategies with high probability, where $H$ is the length of the game, and $A_{\mathcal{X}}$ and $B_{\mathcal{Y}}$ are the total numbers of actions for the two players. We also propose two Follow the Regularized Leader (FTRL) algorithms for this setting: Balanced-FTRL, which matches this lower bound but requires the knowledge of the information set structure beforehand to define the regularization; and Adaptive-FTRL, which needs $\mathcal{O}(H^2(A_{\mathcal{X}}+B_{\mathcal{Y}})/\epsilon^2)$ plays without this requirement by progressively adapting the regularization to the observations.
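For reference, the generic FTRL update that both algorithms instantiate, written over a player's strategy set; the Balanced and Adaptive variants differ in the choice of regularizer $\psi$, and the sequence-form details are in the paper.

```latex
% Generic FTRL update with regularizer \psi and learning rate \eta:
\[
  x_{t+1} \;=\; \operatorname*{arg\,min}_{x \in \mathcal{X}}
  \Big\langle x, \textstyle\sum_{s=1}^{t} \hat{g}_s \Big\rangle
  \;+\; \frac{1}{\eta}\,\psi(x),
\]
% where \hat{g}_s is a loss estimate built from the trajectory feedback
% observed in episode s.
```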
Stochastic Gradient Descent Langevin Dynamics (SGLD) algorithms, which add noise to the classic gradient descent, are known to improve the training of neural networks in some cases where the neural network is very deep. In this paper we study the possibilities of training acceleration for the numerical resolution of stochastic control problems through gradient descent, where the control is parametrized by a neural network. If the control is applied at many discretization times then solving the stochastic control problem reduces to minimizing the loss of a very deep neural network. We numerically show that Langevin algorithms improve the training on various stochastic control problems like hedging and resource management, and for different choices of gradient descent methods.
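The reduction to a very deep network can be sketched as follows: unrolling a control network over N Euler-Maruyama steps produces a computation graph N blocks deep, which is where Langevin-style training can help. The dynamics, costs, and dimensions below are illustrative assumptions, not one of the paper's benchmark problems.

```python
import torch
import torch.nn as nn

# Control u_theta(t, X_t) applied at N_STEPS discretization times of a
# controlled SDE dX = u dt + dW, with quadratic running and terminal costs.
N_STEPS, DIM = 50, 1
DT = 1.0 / N_STEPS

control = nn.Sequential(nn.Linear(DIM + 1, 32), nn.ReLU(), nn.Linear(32, DIM))
opt = torch.optim.Adam(control.parameters(), lr=1e-3)

for _ in range(200):
    x = torch.zeros(256, DIM)          # batch of trajectories
    cost = torch.zeros(256)
    for k in range(N_STEPS):           # unrolled: N_STEPS-deep graph
        t = torch.full((256, 1), k * DT)
        u = control(torch.cat([t, x], dim=1))
        x = x + u * DT + (DT ** 0.5) * torch.randn_like(x)  # Euler-Maruyama
        cost = cost + (u ** 2).sum(dim=1) * DT              # running cost
    loss = (cost + (x ** 2).sum(dim=1)).mean()              # + terminal cost
    opt.zero_grad()
    loss.backward()
    opt.step()
```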
Neural machine translation (NMT) has become the de-facto standard in real-world machine translation applications. However, NMT models can unpredictably produce severely pathological translations, known as hallucinations, that seriously undermine user trust. It thus becomes crucial to implement effective preventive strategies to guarantee their proper functioning. In this paper, we address the problem of hallucination detection in NMT by following a simple intuition: as hallucinations are detached from the source content, they exhibit encoder-decoder attention patterns that are statistically different from those of good quality translations. We frame this problem with an optimal transport formulation and propose a fully unsupervised, plug-in detector that can be used with any attention-based NMT model. Experimental results show that our detector not only outperforms all previous model-based detectors, but is also competitive with detectors that employ large models trained on millions of samples.
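A simplified sketch of the attention-based signal: score a translation by the 1-D Wasserstein distance between its source-attention mass and a reference distribution aggregated from known-good translations. The paper's detector uses a full optimal-transport formulation, so this captures only the underlying intuition.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def attention_anomaly(attn, reference_attn):
    """1-D Wasserstein distance between two attention-mass distributions
    over source positions. attn and reference_attn are 1-D arrays that
    each sum to 1; different source lengths are allowed."""
    return wasserstein_distance(
        np.arange(len(attn)), np.arange(len(reference_attn)),
        u_weights=attn, v_weights=reference_attn,
    )

# Hallucinations tend to concentrate attention mass on a few
# uninformative source tokens (e.g. EOS), pushing this score above a
# threshold calibrated on good-quality translations.
```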
As more and more conversational and translation systems are deployed in production, it is essential to implement and develop effective control mechanisms guaranteeing their proper functioning and security. An essential component to ensure safe system behavior is out-of-distribution (OOD) detection, which aims at detecting whether an input sample is statistically far from the training distribution. Although OOD detection is a widely covered topic in classification tasks, it has received much less attention in text generation. This paper addresses the problem of OOD detection for machine translation and dialog generation from an operational perspective. Our contributions include: (i) RAINPROOF, a Relative informAItioN Projection OOD detection framework; and (ii) a more operational evaluation setting for OOD detection. Surprisingly, we find that OOD detection is not necessarily aligned with task-specific measures. The OOD detector may filter out samples that are well processed by the model and keep samples that are not, leading to weaker performance. Our results show that RAINPROOF breaks this curse and achieves good results in OOD detection while improving performance.
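A simplified sketch of the information-projection idea: score a generation by the divergence between the model's per-token output distributions and a reference distribution fitted on in-distribution data. This is an illustrative reduction, not the exact RAINPROOF estimator.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions, with clipping
    to avoid log(0)."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def ood_score(token_dists, reference_dist):
    """Mean divergence between the model's per-step next-token
    distributions for one generated sequence and a reference
    distribution estimated from in-distribution samples.
    token_dists: (T, V) array of per-step softmax outputs
    reference_dist: (V,) array"""
    return float(np.mean([kl(p, reference_dist) for p in token_dists]))
```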